Current Issue: October - December | Volume: 2016 | Issue: 4 | Articles: 5
In this paper, an authenticated live 3D point cloud video streaming system is presented, using a low-cost 3D sensor camera, the Microsoft Kinect. The proposed system is implemented on a client-server network infrastructure. The live 3D video is captured from the Kinect RGB-D sensor, then a 3D point cloud is generated and processed. Filtering and compression are used to handle the spatial and temporal redundancies. A color-histogram-based conditional filter is designed to reduce the color information for each frame based on the mean and standard deviation. In addition to the designed filter, a statistical outlier removal filter is used. Certificate-based authentication is used, where the client verifies the identity of the server during the handshake process. The processed 3D point cloud video is live-streamed over the TCP/IP protocol to the client. The system is evaluated in terms of compression ratio, total bytes per point, peak signal-to-noise ratio (PSNR), and Structural Similarity (SSIM) index. The experimental results demonstrate that the proposed video streaming system achieves a best case with an SSIM of 0.859 and a PSNR of 26.6 dB at an average compression ratio of 8.42, while the best average compression ratio is about 15.43, with a PSNR of 18.5128 dB and an SSIM of 0.7936.
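As a rough illustration of the per-frame filtering stage described above, the sketch below combines a mean/standard-deviation color threshold with a statistical outlier removal pass; the keep-within-k-sigma rule, the Open3D API choice, and all parameter values are assumptions for illustration, not the authors' exact pipeline.

```python
# Hedged sketch: per-frame conditional color filtering plus statistical
# outlier removal. The threshold rule and parameter values are assumptions;
# the paper's exact conditional filter may differ.
import numpy as np
import open3d as o3d

def conditional_color_filter(points, colors, k=1.5):
    """Keep points whose RGB values lie within mean +/- k*std of the frame."""
    mean, std = colors.mean(axis=0), colors.std(axis=0)
    mask = np.all(np.abs(colors - mean) <= k * std, axis=1)
    return points[mask], colors[mask]

def filter_frame(points, colors):
    pts, cols = conditional_color_filter(points, colors)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts)
    pcd.colors = o3d.utility.Vector3dVector(cols)
    # Statistical outlier removal: drop points whose mean neighbor distance is
    # more than std_ratio standard deviations from the average.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd
```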
We present a new type of local image descriptor which yields binary patterns from small image patches. For the application to fingerprint liveness detection, we achieve rotation-invariant image patches by taking the fingerprint segmentation and orientation field into account. We compute the discrete cosine transform (DCT) for these rotation-invariant patches and obtain binary patterns by comparing pairs of DCT coefficients. These patterns are summarized into one or more histograms per image. Each histogram comprises the relative frequencies of pattern occurrences. Multiple histograms are concatenated, and the resulting feature vector is used for image classification. We name this novel type of descriptor the convolution comparison pattern (CCP). Experimental results show the usefulness of the proposed CCP descriptor for fingerprint liveness detection: CCP outperforms other local image descriptors such as LBP, LPQ and WLD on the LivDet 2013 benchmark. The CCP descriptor is a general type of local image descriptor which we expect to prove useful in areas beyond fingerprint liveness detection, such as biological and medical image processing, texture recognition, face and iris recognition, liveness detection for face and iris images, and machine vision for surface inspection and material classification.
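The binary-pattern construction can be pictured with the short sketch below; the specific DCT coefficient pairs, patch preprocessing, and code length are illustrative assumptions rather than the paper's exact CCP definition.

```python
# Hedged sketch of the binary-pattern idea: DCT a patch, compare pairs of
# coefficients to get bits, and histogram the resulting codes.
import numpy as np
from scipy.fft import dctn

def patch_pattern(patch, pairs):
    """Binary code from comparisons of selected DCT coefficient pairs."""
    c = dctn(patch, norm='ortho')
    bits = [1 if c[i1, j1] > c[i2, j2] else 0 for (i1, j1), (i2, j2) in pairs]
    return sum(b << k for k, b in enumerate(bits))

def ccp_histogram(patches, pairs):
    """Relative frequencies of pattern occurrences over all patches."""
    codes = [patch_pattern(p, pairs) for p in patches]
    hist = np.bincount(codes, minlength=2 ** len(pairs)).astype(float)
    return hist / max(hist.sum(), 1.0)

# Example: 8 hypothetical low-frequency coefficient pairs -> 256-bin histogram
pairs = [((0, 1), (1, 0)), ((0, 2), (2, 0)), ((1, 1), (0, 3)), ((1, 2), (2, 1)),
         ((0, 1), (0, 2)), ((1, 0), (2, 0)), ((1, 1), (2, 2)), ((0, 3), (3, 0))]
```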
The light-gathering ability of the plenoptic camera opens up new opportunities for a wide range of computer vision applications. An efficient and accurate method to calibrate the plenoptic camera is crucial for its development. This paper describes a 10-intrinsic-parameter model for the focused plenoptic camera with misalignment. By exploiting the relationship between the raw image features and the depth-scale information in the scene, we propose to estimate the intrinsic parameters directly from raw images, using a parallel biplanar board which provides a depth prior. The proposed method enables accurate decoding of the light field in both angular and positional information, and guarantees a unique solution for the 10 intrinsic parameters in geometry. Experiments on both simulation and real-scene data validate the performance of the proposed calibration method.
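For orientation only, the sketch below shows the general estimation pattern (nonlinear least-squares over an intrinsic-parameter vector against features detected in the raw image); the placeholder projection function is a plain pinhole stand-in, not the paper's 10-parameter focused plenoptic model.

```python
# Hedged sketch of the general estimation pattern only: refine intrinsic
# parameters by minimizing the discrepancy between observed raw-image features
# and those predicted by a projection model. project_features is a pinhole
# placeholder, not the paper's focused plenoptic model.
import numpy as np
from scipy.optimize import least_squares

def project_features(intrinsics, board_points_3d):
    """Placeholder projection (pinhole); the paper's model has 10 intrinsics."""
    fx, fy, cx, cy = intrinsics
    x, y, z = board_points_3d.T
    return np.stack([fx * x / z + cx, fy * y / z + cy], axis=1)

def residuals(intrinsics, board_points_3d, observed_2d):
    return (project_features(intrinsics, board_points_3d) - observed_2d).ravel()

def calibrate(board_points_3d, observed_2d, init):
    # init: initial guess for the intrinsic vector (4 entries for this placeholder)
    return least_squares(residuals, init, args=(board_points_3d, observed_2d)).x
```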
Regular monitoring and assessment of crops is one of the keys to optimal crop production. This research presents the development of a monitoring system called the Crop Monitoring and Assessment Platform (C-MAP). The C-MAP is composed of an image acquisition unit, which is an off-the-shelf unmanned aerial vehicle (UAV) equipped with a multispectral camera (near-infrared, green, blue), and an image processing and analysis component. The experimental apple orchard at the Parma Research and Extension Center of the University of Idaho was used as the target for monitoring and evaluation. Five experimental rows of the orchard were randomly treated with five different irrigation methods. An image processing algorithm to detect individual trees was developed to facilitate the analysis of the rows, and it was able to detect over 90% of the trees. The image analysis of the experimental rows was based on vegetation indices, and results showed that there was a significant difference in the Enhanced Normalized Difference Vegetation Index (ENDVI) among the five different irrigation methods. This demonstrates that the C-MAP has very good potential as a monitoring tool for orchard management.
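As a concrete pointer, the sketch below computes ENDVI per pixel from the three camera bands using the commonly cited definition ENDVI = ((NIR + G) - 2B) / ((NIR + G) + 2B); the band ordering and the per-tree aggregation helper are assumptions for illustration, not the paper's exact processing chain.

```python
# Hedged sketch: per-pixel ENDVI from NIR, green, and blue bands, plus a
# simple per-tree aggregation given boolean crown masks (both are assumptions).
import numpy as np

def endvi(nir, green, blue, eps=1e-6):
    """ENDVI = ((NIR + G) - 2B) / ((NIR + G) + 2B), with eps to avoid 0/0."""
    nir, green, blue = (b.astype(np.float64) for b in (nir, green, blue))
    return ((nir + green) - 2.0 * blue) / ((nir + green) + 2.0 * blue + eps)

def mean_endvi_per_tree(index_map, tree_masks):
    """Average ENDVI inside each detected tree crown mask."""
    return [float(index_map[mask].mean()) for mask in tree_masks]
```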
The deburring processes of parts with complex geometries usually present many challenges for automation. This paper outlines the machine vision techniques involved in the design and setup of an automated, adaptive cognitive robotic system for laser deburring of complex 3D, high-quality metal cast parts. To carry out the deburring operations on the parts autonomously, 3D machine vision techniques have been used for different purposes, as explained in this paper. These machine vision algorithms, used along with industrial robots and a high-tech laser head, make a fully automated deburring process possible. This setup could potentially be applied to medium-sized parts made of different light casting alloys (Mg, AlZn, etc.).